The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, or about the bottlenecks the community faces in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
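The two validation strategies the survey highlights, k-fold cross-validation and ensembling of multiple identically configured models, can be sketched in a few lines. This is a generic illustration with user-supplied `fit`/`predict` callables, not any specific participant's pipeline:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Split sample indices into k roughly equal, shuffled folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def kfold_ensemble_predict(X, y, X_test, k=5, fit=None, predict=None):
    """Train one model per fold and average their test predictions
    (the 'multiple identical models' ensembling reported in the survey)."""
    folds = kfold_indices(len(X), k)
    preds = []
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])        # user-supplied training routine
        preds.append(predict(model, X_test))   # user-supplied inference routine
    return np.mean(preds, axis=0)              # ensemble by averaging
```

Each of the k models sees a different 80% slice of the training data (for k=5), and averaging their outputs is the simplest form of the ensembling the respondents reported.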
Distributed Constraint Optimization Problem (DCOP) algorithms usually rely on interaction graphs to operate. In open and dynamic environments, such methods need to address how this interaction graph is generated and maintained among agents. Existing methods require reconstructing the entire graph upon detecting changes in the environment, or assume that new agents know their potential neighbors to facilitate connection. We propose a novel distributed interaction graph construction algorithm to address this problem. The proposed method does not assume a predefined constraint graph and stabilizes after disruptive changes in the environment. We evaluate our approach by pairing it with existing DCOP algorithms to solve several generated dynamic problems. The experimental results show that the proposed algorithm effectively constructs and maintains a stable multi-agent interaction graph in open and dynamic environments.
Existing cross-modal hashing (CMH) methods are mainly designed for balanced data, while imbalanced data with a long-tail distribution are more common in the real world. Several long-tail hashing methods have been proposed, but they cannot adapt to multi-modal data due to the complex interplay between labels and the individuality and commonality information of multi-modal data. Furthermore, CMH methods mostly mine the commonality of multi-modal data to learn hash codes, which may override tail labels encoded by the individuality of the respective modalities. In this paper, we propose LtCMH (Long-tail CMH) to handle imbalanced multi-modal data. LtCMH first adopts auto-encoders to mine the individuality and commonality of different modalities, by minimizing the dependency between the individuality of the respective modalities and by enhancing their commonality. It then dynamically combines the individuality and commonality with direct features extracted from the respective modalities to create meta features that enrich the representation of tail labels, and binarizes the meta features to generate hash codes. LtCMH significantly outperforms state-of-the-art baselines on long-tail datasets and achieves better (or comparable) performance on datasets with balanced labels.
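The final step mentioned above, turning real-valued meta features into hash codes, is conventionally done with a sign function, and retrieval then compares codes by Hamming distance. A generic sketch of that convention, not LtCMH's full pipeline:

```python
import numpy as np

def binarize(features):
    """Generate +/-1 hash codes from real-valued meta features via sign,
    the standard binarization step in deep hashing."""
    return np.where(np.asarray(features) >= 0, 1, -1)

def hamming_distance(a, b):
    """Hamming distance between two +/-1 code vectors: the number of
    positions where the codes disagree."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))
```

In a retrieval setting, a query's code is compared against the database codes and the smallest Hamming distances determine the returned items.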
Spiking neural networks (SNNs) mimic the computational strategies of the brain and exhibit great capability in spatiotemporal information processing. As a fundamental factor of human perception, visual attention refers to the dynamic process of selecting salient regions in the biological visual system. Although mechanisms of visual attention have achieved great success in computer vision, they are rarely introduced into SNNs. Inspired by experimental observations of predictive attentional remapping, we propose a new spatial-channel-temporal-fused attention (SCTFA) module that can guide SNNs to efficiently capture underlying target regions by utilizing historically accumulated spatial-channel information. Through systematic evaluation on three event-stream datasets (DVS Gesture, SL-Animals-DVS and MNIST-DVS), we demonstrate that an SNN with the SCTFA module (SCTFA-SNN) not only significantly outperforms the baseline SNN (BL-SNN) and two other SNN models with degenerate attention modules, but also achieves accuracy competitive with existing state-of-the-art methods. Furthermore, our detailed analysis shows that the proposed SCTFA-SNN model is highly robust to noise and exhibits excellent stability, while maintaining acceptable complexity and efficiency. Overall, these findings indicate that appropriately incorporating cognitive mechanisms of the brain may provide a promising approach for improving the capability of SNNs.
Driven by convolutional neural networks (CNNs), medical image classification has developed rapidly. However, because the receptive field of a convolutional kernel has a fixed size, it is difficult to capture the global features of medical images. Although transformers based on self-attention can model long-range dependencies, they have high computational complexity and lack local inductive bias. Many studies have shown that both global and local features are crucial for image classification, but medical images exhibit many noisy, scattered features, intra-class variation, and inter-class similarity. This paper proposes a three-branch hierarchical multi-scale feature fusion network structure, termed HiFuse, as a new method for medical image classification. It can fuse the advantages of transformers and CNNs across multiple scales and hierarchies without destroying the modeling of either, thereby improving classification accuracy on a variety of medical images. A parallel hierarchy of local and global feature blocks is designed to efficiently extract local features and global representations at various semantic scales, with the flexibility to model at different scales and a computational complexity linear in image size. Moreover, an adaptive hierarchical feature fusion block (HFF block) is designed to comprehensively utilize the features obtained at different hierarchical levels. The HFF block contains spatial attention, channel attention, a residual inverted MLP, and a shortcut to adaptively fuse semantic information between the multi-scale features of each branch. The accuracy of our proposed model is 7.6% higher than the baseline on the ISIC2018 dataset, 21.5% higher on the COVID-19 dataset, and 10.4% higher on the Kvasir dataset. Compared with other advanced models, HiFuse performs best. Our code is open source and available at https://github.com/huoxiangzuo/hifuse.
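The HFF block described above combines spatial attention, channel attention, and a shortcut. As a rough, dependency-free illustration of that fusion pattern only: the gates below are parameter-free sigmoids over feature means, whereas the paper's block uses learned layers and additionally a residual inverted MLP, which are omitted here:

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """x: (C, H, W). Gate each channel by a sigmoid of its global mean."""
    gate = _sigmoid(x.mean(axis=(1, 2)))
    return x * gate[:, None, None]

def spatial_attention(x):
    """x: (C, H, W). Gate each spatial location by a sigmoid of its
    across-channel mean."""
    gate = _sigmoid(x.mean(axis=0))
    return x * gate[None, :, :]

def hff_fuse(local_feat, global_feat):
    """Sketch of an HFF-style fusion: channel attention on the local branch,
    spatial attention on the global branch, plus a shortcut connection."""
    fused = channel_attention(local_feat) + spatial_attention(global_feat)
    return fused + local_feat + global_feat  # shortcut keeps both inputs
```

The key design point mirrored here is that the two branches are re-weighted along complementary axes (channels vs. spatial positions) before being merged, so neither representation dominates the fused output.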
Extracting contrast-filled vessels from X-ray coronary angiography (XCA) image sequences is of great clinical significance for intuitive diagnosis and therapy. In this study, the XCA image sequence O is regarded as a three-dimensional tensor input, the vessel layer H as a sparse tensor, and the background layer B as a low-rank tensor. Using tensor nuclear norm (TNN) minimization, a novel vessel layer extraction method based on tensor robust principal component analysis (TRPCA) is proposed. Furthermore, considering the irregular motion of vessels and the dynamic interference of surrounding irrelevant tissues, a total variation (TV) regularized spatiotemporal constraint is introduced to separate the dynamic background E. A two-stage region growing (TSRG) method is then used for vessel enhancement and segmentation. Global threshold segmentation is used as preprocessing to obtain the main branches, a Radon-like feature (RLF) filter is used to enhance and connect broken minor segments, and the final vessel mask is constructed by combining the two intermediate results. We evaluated the visibility of the TV-TRPCA algorithm for foreground extraction and the accuracy of the TSRG algorithm for vessel segmentation on real clinical XCA image sequences and third-party databases. Both qualitative and quantitative results verify the superiority of the proposed methods over existing state-of-the-art approaches.
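A standard TRPCA-style formulation of the layer separation described above can be written as follows; the regularization weights $\lambda, \gamma$ and the exact placement of the TV term are illustrative assumptions rather than the paper's precise objective:

$$
\min_{\mathcal{B},\,\mathcal{H},\,\mathcal{E}} \; \|\mathcal{B}\|_{\mathrm{TNN}} \;+\; \lambda\,\|\mathcal{H}\|_{1} \;+\; \gamma\,\mathrm{TV}(\mathcal{E}) \quad \text{s.t.} \quad \mathcal{O} = \mathcal{B} + \mathcal{H} + \mathcal{E},
$$

where the TNN term promotes a low-rank background $\mathcal{B}$, the $\ell_1$ term promotes a sparse vessel layer $\mathcal{H}$, and the TV term regularizes the dynamic background residual $\mathcal{E}$ in space and time.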
Importance sampling (IS) is a popular technique in off-policy evaluation, which re-weights the returns of trajectories in the replay buffer to boost sample efficiency. However, training with IS can be unstable, and previous attempts to address this issue mainly focused on analyzing the variance of IS. In this paper, we reveal that the instability is also related to a new notion, the Reuse Bias of IS: the bias in off-policy evaluation caused by reusing the replay buffer for both evaluation and optimization. We theoretically show that off-policy evaluation and optimization of the current policy with data from the replay buffer lead to an overestimation of the objective, which may cause erroneous gradient updates and degenerate performance. We further provide a high-probability upper bound on the Reuse Bias, and show that controlling one term of this upper bound can control the Reuse Bias, by introducing a notion of stability for off-policy algorithms. Based on these analyses, we finally propose a novel importance-sampling framework, BIRIS, along with practical algorithms that can alleviate the negative impact of the Reuse Bias. Experimental results show that our BIRIS-based methods significantly improve sample efficiency on a series of continuous control tasks.
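Ordinary per-trajectory importance sampling, the estimator whose reuse across evaluation and optimization induces the bias analyzed above, can be written compactly. This is a generic sketch of the estimator itself, not the BIRIS algorithm:

```python
import numpy as np

def is_return_estimate(returns, logp_target, logp_behavior):
    """Ordinary importance-sampling estimate of the target policy's expected
    return from trajectories collected by a behavior policy.

    returns       : (n,) Monte-Carlo return of each trajectory
    logp_target   : (n,) sum over t of log pi(a_t|s_t) under the target policy
    logp_behavior : (n,) sum over t of log mu(a_t|s_t) under the behavior policy
    """
    # Per-trajectory IS weight: product of per-step likelihood ratios,
    # computed in log space for numerical stability.
    w = np.exp(logp_target - logp_behavior)
    return float(np.mean(w * returns))
```

The Reuse Bias arises when the policy that maximizes this estimate over a fixed buffer is the same policy the estimate is then used to evaluate: selecting the maximizer on the same samples systematically overestimates the true objective, much like evaluating a model on its training set.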
6D pose estimation of rigid objects from RGB-D images is crucial for object grasping and manipulation in robotics. Although the RGB channels and the depth (D) channel are often complementary, providing appearance and geometry information respectively, how to fully benefit from the two cross-modal data sources remains non-trivial. We start from a simple yet new observation: when an object rotates, its semantic label is pose-invariant while its keypoint offset directions are pose-variant. To this end, we propose SO(3)-Pose, a new representation learning network that explores SO(3)-equivariant and SO(3)-invariant features from the depth channel for pose estimation. The SO(3)-invariant features facilitate learning more distinctive representations for segmenting objects with similar appearance from the RGB channels. The SO(3)-equivariant features communicate with the RGB features to deduce the (missing) geometry for detecting keypoints of objects with reflective surfaces from the depth channel. Unlike most existing pose estimation methods, our SO(3)-Pose not only enables information communication between the RGB and depth channels, but also naturally absorbs SO(3)-equivariant geometric knowledge from the depth images, leading to better appearance and geometry representation learning. Comprehensive experiments show that our method achieves state-of-the-art performance on three benchmarks.
High-confidence overlap prediction and accurate correspondences are critical for aligning pairwise point clouds in a partial-to-partial manner. However, there is inherent uncertainty between the overlapping and non-overlapping regions, which has long been neglected and significantly affects registration performance. Beyond current wisdom, we propose a novel uncertainty-aware overlap prediction network, dubbed UTOPIC, to tackle the ambiguous overlap prediction problem; to the best of our knowledge, this is the first work to explicitly introduce overlap uncertainty into point cloud registration. Moreover, we induce the feature extractor to implicitly perceive shape knowledge through a completion decoder, and present a geometric relation embedding for the transformer to obtain transformation-invariant, geometry-aware feature representations. With the merits of more reliable overlap scores and more precise dense correspondences, UTOPIC can achieve stable and accurate registration results even for inputs with limited overlap regions. Extensive quantitative and qualitative experiments on synthetic and real benchmarks demonstrate the superiority of our approach over state-of-the-art methods. Code is available at https://github.com/zhileichen99/utopic.
Graph neural networks (GNNs) are a promising approach for applications with non-Euclidean data. However, training GNNs on large-scale graphs with hundreds of millions of nodes is both resource- and time-consuming. Unlike DNNs, GNNs usually have larger memory footprints, and thus GPU memory capacity and PCIe bandwidth are the main resource bottlenecks in GNN training. To address this problem, we propose BiFeat, a graph feature quantization method that accelerates GNN training by significantly reducing the memory footprint and PCIe bandwidth requirements, so that GNNs can take full advantage of GPU computing capability. Our key insight is that, unlike DNNs, GNNs are less prone to the information loss of input features caused by quantization. We identify the main accuracy-impact factors in graph feature quantization and theoretically prove that BiFeat training converges to a network whose loss is within $\epsilon$ of the optimal loss of the uncompressed network. We conduct an extensive evaluation of BiFeat using several popular GNN models and datasets, including training on MAG240M, the largest public graph dataset. The results show that BiFeat achieves a compression ratio of more than 30 and improves GNN training speed by 200%-320% with marginal accuracy loss. In particular, BiFeat achieves a record by training a GNN on MAG240M within one hour using only four GPUs.
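A minimal illustration of feature quantization of the kind the abstract describes: uniform per-feature integer codes stored with the scale and offset needed to decode on the GPU side. This is a generic sketch; the paper's quantization scheme and its accuracy-impact analysis are more involved:

```python
import numpy as np

def quantize_features(x, num_bits=8):
    """Uniform per-feature quantization of a node-feature matrix x of shape
    (num_nodes, num_features). Returns integer codes plus the per-feature
    (scale, offset) needed to decode them."""
    qmax = 2 ** num_bits - 1
    lo = x.min(axis=0)
    hi = x.max(axis=0)
    # Guard against constant features to avoid division by zero.
    scale = np.where(hi > lo, (hi - lo) / qmax, 1.0)
    codes = np.round((x - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize_features(codes, scale, lo):
    """Recover approximate float features from the compact codes."""
    return codes.astype(np.float32) * scale + lo
```

At 8 bits, a float32 feature matrix shrinks 4x; lower bit widths (packed several codes per byte) are what push compression ratios toward the 30x reported above, at the cost of coarser reconstruction.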